24 research outputs found
Pretty Good Strategies and Where to Find Them
Synthesis of bulletproof strategies in imperfect information scenarios is a
notoriously hard problem. In this paper, we suggest that it is sometimes
viable to aim at "reasonably good" strategies instead. This makes
sense not only when an ideal strategy cannot be found due to the complexity of
the problem, but also when no winning strategy exists at all. We propose an
algorithm for synthesis of such "pretty good" strategies. The idea is to first
generate a surely winning strategy with perfect information, and then
iteratively improve it with respect to two criteria of dominance: one based on
the number of conflicting decisions in the strategy, and the other related to
the tightness of its outcome set. We focus on reachability goals and evaluate
the algorithm experimentally, with very promising results.
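The improvement loop described above can be illustrated on a toy example. The sketch below makes simplifying assumptions that are not the paper's actual API: strategies are dictionaries from states to actions, indistinguishable states are grouped into observation classes, and only the first dominance criterion (the number of conflicting decisions) is shown.

```python
# Illustrative sketch of the conflict-based dominance criterion (all names
# and data structures here are assumptions, not the paper's definitions).

def conflicts(strategy, obs_classes):
    """Count conflicting decisions: extra actions prescribed inside a
    single class of indistinguishable states."""
    total = 0
    for cls in obs_classes:
        actions = {strategy[s] for s in cls if s in strategy}
        if len(actions) > 1:
            total += len(actions) - 1
    return total

def improve(strategy, obs_classes):
    """One improvement step: inside each conflicting class, unify on the
    majority action, reducing the conflict count."""
    new = dict(strategy)
    for cls in obs_classes:
        actions = [strategy[s] for s in cls if s in strategy]
        if len(set(actions)) > 1:
            majority = max(set(actions), key=actions.count)
            for s in cls:
                if s in new:
                    new[s] = majority
    return new

# Toy perfect-information strategy over states 0..3, with states 0-2
# indistinguishable to the agent.
strategy = {0: "a", 1: "a", 2: "b", 3: "a"}
classes = [{0, 1, 2}, {3}]
improved = improve(strategy, classes)
print(conflicts(strategy, classes))  # 1 conflict before
print(conflicts(improved, classes))  # 0 after
```

In the paper's setting the loop would alternate this criterion with the second one (tightness of the outcome set) until a fixpoint; the sketch shows only a single step of the first.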
Towards Assume-Guarantee Verification of Strategic Ability
Formal verification of strategic abilities is a hard problem. We propose to
use the methodology of assume-guarantee reasoning in order to facilitate model
checking of alternating-time temporal logic with imperfect information and
imperfect recall.
Towards Modelling and Verification of Social Explainable AI
Social Explainable AI (SAI) is a new direction in artificial intelligence
that emphasises decentralisation, transparency, social context, and a focus
on human users. SAI research is still at an early stage. Consequently, it
concentrates on delivering the intended functionalities, but largely ignores
the possibility of unwelcome behaviours due to malicious or erroneous activity.
We propose that, in order to capture the breadth of relevant aspects, one can
use models and logics of strategic ability that have been developed in
multi-agent systems. Using the STV model checker, we take the first step
towards the formal modelling and verification of SAI environments, in
particular of their resistance to various types of attacks by compromised AI
modules.
Assume-Guarantee Verification of Strategic Ability
Model checking of strategic abilities is a notoriously hard problem, even
more so in the realistic case of agents with imperfect information.
Assume-guarantee reasoning can be of great help here, providing a way to
decompose the complex problem into a small set of exponentially easier
subproblems. In this paper, we propose two schemes for assume-guarantee
verification of alternating-time temporal logic with imperfect information. We
prove the soundness of both schemes, and discuss their completeness. We
illustrate the method by examples based on known benchmarks, and show
experimental results that demonstrate the practical benefits of the approach.
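The decomposition idea can be pictured with a toy propositional version of the rule. This is a sketch under strong simplifications (models are plain sets of states, properties are predicates), not the paper's actual schemes for alternating-time temporal logic:

```python
# Toy illustration of an assume-guarantee rule (names and structures are
# illustrative assumptions). To show that the composition M1 || M2
# satisfies property P, it suffices to check two smaller premises:
#   (1) M2 guarantees assumption A about its behaviour, and
#   (2) M1, explored only under environments satisfying A, satisfies P.

def satisfies(states, prop):
    """Two-valued check: every reachable state satisfies the predicate."""
    return all(prop(s) for s in states)

def assume_guarantee(m1_states_under_A, m2_states, A, P):
    """Sound (but in general incomplete) rule: both premises together
    imply that the composition satisfies P."""
    return satisfies(m2_states, A) and satisfies(m1_states_under_A, P)

# M2's reachable states all satisfy the assumption "value below 5", and
# M1 explored under that assumption only reaches even states, so P holds.
A = lambda s: s < 5
P = lambda s: s % 2 == 0
print(assume_guarantee({0, 2, 4}, {1, 3}, A, P))  # True
print(assume_guarantee({0, 2, 4}, {7}, A, P))     # False: M2 breaks A
```

The point of the rule, as in the abstract, is that each premise is checked on a much smaller model than the full composition.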
STV+AGR: Towards Practical Verification of Strategic Ability Using Assume-Guarantee Reasoning
We present a substantially expanded version of our tool STV for strategy
synthesis and verification of strategic abilities. The new version provides a
web interface and support for assume-guarantee verification of multi-agent
systems.
Natural Strategic Abilities in Voting Protocols
Security properties often focus on the technological side of the
system. One implicitly assumes that the users will behave in the right way to
preserve the property at hand. In real life, this cannot be taken for granted.
In particular, security mechanisms that are difficult and costly to use are
often ignored by the users, and do not really defend the system against
possible attacks.
Here, we propose a graded notion of security based on the complexity of the
user's strategic behavior. More precisely, we suggest that the level to which a
security property is satisfied can be defined in terms of (a) the
complexity of the strategy that the voter needs to execute to make the
property true, and (b) the resources that the user must employ on the way. The
simpler and cheaper the strategy, the higher the degree of security.
We demonstrate how the idea works in a case study based on an electronic
voting scenario. To this end, we model the vVote implementation of the Prêt à
Voter protocol for coercion-resistant and voter-verifiable elections. Then, we
identify "natural" strategies for the voter to obtain receipt-freeness, and
measure the effort that they require from the voter. We also look at how hard
it is for the coercer to compromise the election through a randomization attack.
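The graded-security idea can be made concrete with a toy complexity measure. The representation and the metric below are illustrative assumptions, not the paper's formal definitions of natural strategies:

```python
# Hedged sketch: a "natural" strategy is a list of (guard, action) rules,
# where each guard is a list of atomic conditions the voter must check.
# Its complexity is the total guard size plus a resource cost per action;
# lower complexity would correspond to a higher degree of security.

def strategy_complexity(rules, action_cost):
    """Sum of guard sizes plus resource costs of the prescribed actions."""
    return sum(len(guard) + action_cost.get(action, 0)
               for guard, action in rules)

# A hypothetical voter strategy aimed at receipt-freeness:
voter_strategy = [
    (["at_booth"], "fill_ballot"),
    (["ballot_filled", "printer_ok"], "check_receipt"),
]
costs = {"fill_ballot": 1, "check_receipt": 2}
print(strategy_complexity(voter_strategy, costs))  # (1+1) + (2+2) = 6
```

Under such a measure, two mechanisms satisfying the same property could be ranked by how simple and cheap the strategy is that the voter must execute.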
Multi-Valued Verification of Strategic Ability
Some multi-agent scenarios call for the possibility of evaluating
specifications in a richer domain of truth values. Examples include runtime
monitoring of a temporal property over a growing prefix of an infinite path,
inconsistency analysis in distributed databases, and verification methods that
use incomplete anytime algorithms, such as bounded model checking. In this
paper, we present multi-valued alternating-time temporal logic (mv-ATL*), an
expressive logic to specify strategic abilities in multi-agent systems. It is
well known that, for branching-time logics, a general method for
model-independent translation from multi-valued to two-valued model checking
exists. We show that the method cannot be directly extended to mv-ATL*. We also
propose two ways of overcoming the problem. Firstly, we identify constraints on
formulas for which the model-independent translation can be suitably adapted.
Secondly, we present a model-dependent reduction that can be applied to all
formulas of mv-ATL*. We show that, in all cases, the complexity of verification
increases only linearly when new truth values are added to the evaluation
domain. We also consider several examples that show possible applications of
mv-ATL* and motivate its use for model checking multi-agent systems.
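The flavour of multi-valued verification can be shown on a tiny truth-value domain. This is a hedged sketch: the three-element chain, the min/max connectives, and the threshold "cut" below are textbook simplifications, not the constructions the paper develops for mv-ATL*.

```python
# Toy three-valued domain {0, 1/2, 1} (an illustrative assumption):
# conjunction is min, disjunction is max, and a multi-valued verdict can
# be projected to a family of two-valued questions "does the value reach
# threshold t?" - one classical check per truth value.

VALUES = [0.0, 0.5, 1.0]

def mv_and(a, b):
    return min(a, b)

def mv_or(a, b):
    return max(a, b)

def cut(value, threshold):
    """Two-valued projection of a multi-valued verdict."""
    return value >= threshold

v = mv_or(mv_and(0.5, 1.0), 0.0)    # verdict 0.5: "partially satisfied"
print([cut(v, t) for t in VALUES])  # [True, True, False]
```

The abstract's complexity claim matches this picture: adding truth values to the domain adds checks only linearly, one projection per value.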